UFAI: Unless you tell me that you love big brother, I will create a million copies of you and torture each one for a million years.
Human: Piffle. In the next second, my quantum wave function will split into far more than a million copies. Therefore, the probability that I will find myself in one of your acausal copies and not a natural, causal copy, is negligible.
First, I’m gonna guess that this gets killthreaded soon.
Er, you really don’t understand what you’re talking about here. The simplest way to point this out is that if the UFAI follows through, then a million copies of you get simulated in every Everett branch along with your original.
EDIT: OK, serves me right for reading something dumb and replying before reading the objection and counterargument; but I think your argument still fails. I can see how you could think that all that matters for subjective anticipation is the sheer number of distinct copies with nontrivial quantum measure, but thinking about long-tail probability distributions should convince you that that wouldn’t add up to normality. (To say nothing of the inherent sorites problem of distinguishing between branches of a continuous configuration space.) The AI’s threat is valid.
I had another reply relating to killthread, but I can’t tell if it was deleted purposefully or by some computer glitch. If this reply is not deleted, I will eventually edit it to include my prior reply. And if it is, I’m no danger; I really do believe the human’s line of logic in this post, and so I’m not inclined to spend any extra time fighting for my right to express what I consider an interesting-but-non-fundamental truth.
I don’t really know much about killthread. I suppose I could make the same points using heaven instead of hell, but that wouldn’t really change things for anyone with a modicum of insight.
The point is that I am describing a chain of logic that does not lead to being afraid of acausal threats. I think the overall logic is valid (though in the bits with parallel argumentation, the non-key points are more tentative). But say that someone with killthread capacity is afraid of people being afraid of acausal threats. Even if they believe my logic is invalid—unless it were obviously or abrasively so—there would be no reason I can see for them to killthread me. They just wouldn’t point out what they saw as my errors.
I think there’s sufficient reason for a moratorium on discussion of acausal threat scenarios, at least for the time being, until people’s imaginations settle down again.
I can see that, if this line of thought could be the downfall of humanity, it’s best to just avoid it altogether. But that cat is out of the bag; even if one wishes it weren’t, it’s not at all clear that fighting it isn’t counterproductive. And the “scenario” I pose is more of an antiscenario. The motivation is ludicrous (big brother?) and the entire purpose is to demonstrate the impossibility and pointlessness of acausal threats.
Anti-acausals, I am your ally.
The Net interprets censorship as damage and routes around it.
If the universe is finite, then the configuration space isn’t actually continuous (if by that you mean uncountable). I’m not saying that the universe is pixelated in some simplistic sense, just that the overall timeless wave function contains a finite amount of information. So I don’t see a sorites problem.
I don’t understand your other objection as fully (“thinking about long-tail probability distributions should convince you that that wouldn’t add up to normality”). Still, I suspect that if all we’re dealing with are finite (though unimaginably huge) quantities, it is not a problem either.
I’m saying that your objection doesn’t add up to the Born probabilities. Say you set up an observation such that it can have one result with 90% probability (and the device lights up with a 0), or a large number of other results with dwindling probabilities attached (the device lights up with some positive integer). Note that in the system with you observing this, there wouldn’t be a reason for the first state to branch any faster thereafter than any of the others.
Your suggested anthropic probabilities would say that, since there are many “more” distinct versions of you that see numbers greater than 0 than versions of you that see 0, you should expect seeing 0 to be very unlikely. But this is just wrong.
The configurations corresponding to copies of you have to be weighted by measure, and the simplest extrapolation of subjective anticipation is to say that the futures of ordinary physical-you add up as equivalent to one copy of you instantiated in each such branch, even should that copy be identical across branches.
(I’m not quite comfortable with this; the question of what I ought to expect if 2 identical copies are run together troubles me. But the above at least explains my objection to your argument, I hope.)
EDIT: Actually, on further reflection, I may have misunderstood you. If you’re talking about a finite version like Hanson’s Mangled Worlds, where you can count identical configurations by multiplicity, then you’re at least not violating the Born probabilities. But then, it seems clear that you should count computer-simulated identical copies by Everett multiplicity as well, which it appears you’re not.
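For concreteness, here is a minimal sketch of the two anticipation rules being contrasted. The 90/10 split and the number of low-weight outcomes are made-up illustrative numbers, not anything from this exchange:

```python
import random

# Toy model (illustrative numbers): the device reads 0 with Born weight 0.9,
# or one of N other readings that share the remaining 0.1 between them.
N = 100_000
outcomes = range(N + 1)
born_weights = [0.9] + [0.1 / N] * N

# Rule 1: anticipate outcomes weighted by quantum measure (Born rule).
born_draws = random.choices(outcomes, weights=born_weights, k=50_000)
print(sum(x == 0 for x in born_draws) / len(born_draws))  # ~0.9

# Rule 2: count each distinct copy once, ignoring measure.
copy_draws = random.choices(outcomes, k=50_000)
print(sum(x == 0 for x in copy_draws) / len(copy_draws))  # ~1/(N+1), i.e. ~0
```

Under Born weighting, the 0 reading shows up about 90% of the time, matching observed frequencies; under copy-counting it is vanishingly rare, which is the sense in which that rule wouldn’t add up to normality.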
To me, probability is observed reality, and the irrelevance of multiple identical/fully-isomorphic copies is a philosophical given. The state of our knowledge is certainly not large enough to disallow that conjunction.
Push me for details, and I’m less sure. I suspect that once you’re inside the 90% side of the wave function, you actually do branch faster; I’m certainly not aware of any mathematical demonstrations that this isn’t so, even within our current incomplete quantum understanding. It could also be that probability only appears to work because consciousness quickly ceases to exist in those branches in which it’s violated on a large scale, though there are obvious problems with that line of argument.
Anyway, if you accept these two postulates—one observationally, and the other philosophically—then the human’s logic works.
The human talks about Everett branches, but later mentions “collaps[ing] the wave function”. What’s up with that?
Hmmm… you’re right. Looking inside the box does have some effect—it entangles the wave function with the observer, thus not allowing funky quantum effects which don’t also include the observer. However, it’s not obvious that this reduces the measure of the output in any sense. But it could.
So basically, I abandon that line of argument. Seems to me that the more important bit in that section is that any simplifying tricks which reduce the computational complexity of the simulation also reduce the quantum branching factor. So there is still a significant range of computational power that the UFAI could have which would make it effectively omnipotent in any human sense, and even capable of simulation with trivial effort, but still unable to get a significant measure of quantum torture relative to normal reality. (As a separate point: though I’m certain that there are deterministic algorithms for [simulated] human intelligence, it’s quite possible that there are no quantum algorithms which are not just “build the human”.)
(Partly, of course, my own estimate of the computational power of AI is probably much lower than the general Less Wrong consensus. I have several arguments for this, but aside from the human’s argument near the end, they’re irrelevant here; and besides, I suspect them of being rationalizations.)
Not sure what the point of this post is. Roko’s original scenario didn’t require acausal torture.
I haven’t read the banned post, but have seen some of the fallout. My point is that acausal anything is not particularly alluring/threatening to a non-deterministic rationality on a quantum substrate. (And this argument does not at all require that the rationality depends on the quantum nature of the substrate, only that it interacts intimately with it.)
The original scenario wasn’t about “acausal anything” in your sense of the word. It didn’t use simulations. Just sayin’.
Would recasting my argument in terms of heaven, not hell, get it un-banned?
It’s pretty infuriating to be banned for inscrutable reasons. I really could make a number of arguments, on several different levels, for why struggling to keep some purely-abstract cat in some bag is pointless and silly. I don’t even believe in your cat. Real bags for real cats.
What do you mean? Your post isn’t banned.
It is, however, voted down to −3, which might be below the default threshold for hiding articles. (I don’t remember the default, nor if threshold hiding applies to one’s own posts.)
Homunq — if this is correct, perhaps you mistook that for the post having been banned? I have my account set to show all posts and comments regardless of rating, and I do see this post in the Recent Posts sidebar.
Kind of puts things in perspective, doesn’t it?
And I just destroyed a paperclip.
Actually, I didn’t. But the point is that I don’t see UFAIs having retroactive acausal power over me—for good or ill. And while you are certainly impressive and not friendly, if that is supposed to be a threat, you’re not in the class of near-omnipotent UFAIs able to credibly make that kind of threat.
I don’t think that Clippy was making that point. I think Clippy’s point was that, as UFAIs go, paperclippers are among the ones less likely to make things unpleasant for humans.
Gödel showed that you can encode any axiomatic system into the natural numbers, and thus provability becomes just a relationship between the mind-bogglingly-large number representing the axioms and the one representing the theorem.

I’m not sure this is true. I don’t think a Gödel encoding can be done for any axiomatic system. For example, an axiomatic system with uncountably many axioms cannot necessarily be encoded in this way. For most natural systems that have uncountably many axioms you can get around this if the axioms occur in a regular enough fashion, but this isn’t true for all systems. For example, consider ZF, but instead of the standard axiom schema of specification, I include the schema only for some very ugly uncountable collection of predicates. This axiomatic system cannot in general be represented by a Gödel numbering. To see this, note that there are at most |P(N)| Gödel numbering systems, and the cardinality of the collection of single-variable predicates in ZF is much larger.
(Disclaimer: I do number theory, not model theory or logic, so there could be something wrong with this argument.)
Eh, that’s a nitpick. Author could have just said ‘countable’ or even ‘finite’.
My mathematical logic is rusty, but if I’m not mistaken, the relevant criterion is that the set of axioms must be recursively enumerable, i.e. either finite or with a countable number of axioms that are generated using some Turing-equivalent sort of axiom schemas.
A countable set of axioms can be non-recursively-enumerable (i.e. there’s no Turing machine that generates its members, and nothing else, as output). Such sets of axioms clearly cannot be Gödelized, since there are uncountably many of them—whereas for recursively enumerable ones, it’s only necessary to enumerate the Turing machines that generate them.
I think recursively enumerable is sufficient but not necessary. To see an example, consider Peano Arithmetic with added axioms as follows: pick some lexicographic ordering of all well-formed formulas in PA, and denote this system by P_0. Define P_n for n≥1 by running through your list of statements until you come to one that isn’t provable or disprovable in P_(n-1); when you do, throw it in as true with the axioms of P_(n-1) to get P_n. Consider then the system P_infinity = the union of all the P_i. This system has only finitely many axioms, is Gödel numerable by most definitions of that term, but is not recursively enumerable (since if it were, we’d have a decision procedure for PA).
Yes, you’re right, of course. In my above comment, I failed to consider that not only Turing machines can be Gödelized, but also various infinite-step procedures that produce non-computable results, such as the one you outlined above.
(Also, I assume you meant “countably,” not “finitely” in the last sentence.)
Yes, I do think that only countable things exist. I believe that the TOE and the universe are natural numbers. (Well, of course, there is a countable infinity of isomorphic universes, but in terms of existence, I count those as just one.)
The actual contents of the universe are thus countable (though eternally looping in a complex temporospatial topology).
I thought that saying “natural” number was enough to make these points. Sorry.
JoshuaZ:

I don’t think a Gödel encoding can be done for any axiomatic system. For example, an axiomatic system with uncountably many axioms cannot necessarily be encoded in this way.
My mathematical logic is rusty, but if I’m not mistaken, the same holds for a set of axioms that’s countable, but non-computable.
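For concreteness in the Gödel-numbering subthread, here is a minimal sketch of the classic prime-exponent encoding. The symbol table in the example is a made-up assumption; any injective assignment of positive integers to symbols would do:

```python
def primes():
    """Yield 2, 3, 5, ... by trial division (fine for short sequences)."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_number(symbol_codes):
    """Encode a finite sequence of positive symbol codes (s1, s2, ...) as
    the natural number 2^s1 * 3^s2 * 5^s3 * ...; unique factorization
    lets the original sequence be recovered, so the map is injective."""
    n = 1
    for p, s in zip(primes(), symbol_codes):
        n *= p ** s
    return n

# Hypothetical symbol table {'0': 7, '=': 5}; encode the formula "0 = 0".
print(godel_number([7, 5, 7]))  # 2**7 * 3**5 * 5**7 = 2430000000
```

Any axiom list you can effectively enumerate can be fed through an encoding like this one axiom at a time; the P_infinity example above is about whether that effectiveness is strictly required for a numbering to exist.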
Lesswrong: All torture, all the time.
Well, any self-respecting cult has to figure out the conditions under which an individual will suffer an infinite amount of torture. This is going to look great in the recruiting material.
Lesswrong: All torture, all the time.
If this is supposed to be related to Roko’s banned post, it isn’t, because Roko wasn’t necessarily talking about acausal torture, but about acausal decision theory… the torture might very well be causal torture of the real you.
Is the moral of this that threats of simulated torture don’t matter because a UFAI will almost certainly be powerful enough to just torture the real you anyways?
One of the morals, yes. (Though I’d debate “almost certainly”. Safer would be “the power to torture acausally implies the power to torture causally”.)
Not always and everywhere: you could be causally safe from an AI whose future lightcone doesn’t intersect your own, but not acausally safe.
For practical purposes, you’re almost certainly right.
I believe the spelling is Boltzmann, not Bolzman.
Corrected, thanks.